
    Self-normalized Cramér type moderate deviations for the maximum of sums

    Let $X_1, X_2, \ldots$ be independent random variables with zero means and finite variances, and let $S_n=\sum_{i=1}^n X_i$ and $V_n^2=\sum_{i=1}^n X_i^2$. A Cramér type moderate deviation for the maximum of the self-normalized sums $\max_{1\leq k\leq n} S_k/V_n$ is obtained. In particular, for identically distributed $X_1, X_2, \ldots$, it is proved that $P(\max_{1\leq k\leq n} S_k \geq x V_n)/(1-\Phi(x)) \rightarrow 2$ uniformly for $0 < x \leq \mathrm{o}(n^{1/6})$ under the optimal finite third moment of $X_1$. Published in Bernoulli (http://dx.doi.org/10.3150/12-BEJ415) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/bernoulli/, http://isi.cbs.nl/BS/bshome.htm).
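    A minimal Monte Carlo sketch of the ratio in the displayed result (an illustration added here, not part of the original abstract); the sample size, distribution, and value of x are illustrative choices, and the X_i are centered with a finite third moment as the theorem requires.

```python
import numpy as np
from scipy.stats import norm

# Monte Carlo sketch of P(max_{1<=k<=n} S_k >= x V_n) / (1 - Phi(x)).
# The paper proves this ratio -> 2 uniformly for 0 < x <= o(n^{1/6})
# when the X_i are i.i.d., centered, with a finite third moment.
rng = np.random.default_rng(0)
n, reps, x = 500, 20_000, 1.5                   # illustrative parameters

X = rng.standard_exponential((reps, n)) - 1.0   # centered, finite third moment
S = np.cumsum(X, axis=1)                        # partial sums S_1, ..., S_n
V = np.sqrt((X ** 2).sum(axis=1))               # self-normalizer V_n
ratio = (S.max(axis=1) >= x * V).mean() / (1 - norm.cdf(x))
print(f"empirical ratio at x={x}: {ratio:.2f} (limit is 2)")
```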

    On non-stationary threshold autoregressive models

    In this paper we study the limiting distributions of the least-squares estimators for the non-stationary first-order threshold autoregressive (TAR(1)) model. It is proved that the limiting behaviors of the TAR(1) process are very different from those of the classical unit root model and the explosive AR(1) model. Published in Bernoulli (http://dx.doi.org/10.3150/10-BEJ306) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/bernoulli/, http://isi.cbs.nl/BS/bshome.htm).
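    A small simulation sketch of the objects the abstract refers to (illustrative only, not the paper's setup): a TAR(1) process with a known threshold at zero and the regime-wise least-squares estimators of its two autoregressive coefficients.

```python
import numpy as np

# Illustrative TAR(1) model with threshold r = 0:
#   X_t = a1 * X_{t-1} * 1{X_{t-1} <= 0} + a2 * X_{t-1} * 1{X_{t-1} > 0} + e_t
# together with the regime-wise least-squares estimators of (a1, a2).
rng = np.random.default_rng(1)

def simulate_tar1(n, a1, a2, sigma=1.0):
    x = np.zeros(n + 1)
    for t in range(1, n + 1):
        coef = a1 if x[t - 1] <= 0 else a2
        x[t] = coef * x[t - 1] + sigma * rng.standard_normal()
    return x

def ls_estimate(x):
    lag, cur = x[:-1], x[1:]
    lo, hi = lag <= 0, lag > 0
    a1_hat = (lag[lo] * cur[lo]).sum() / (lag[lo] ** 2).sum()
    a2_hat = (lag[hi] * cur[hi]).sum() / (lag[hi] ** 2).sum()
    return a1_hat, a2_hat

x = simulate_tar1(2000, a1=1.0, a2=0.5)   # unit-root-like lower regime
print(ls_estimate(x))
```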

    CLIPVG: Text-Guided Image Manipulation Using Differentiable Vector Graphics

    Considerable progress has recently been made in leveraging CLIP (Contrastive Language-Image Pre-Training) models for text-guided image manipulation. However, all existing works rely on additional generative models to ensure the quality of results, because CLIP alone cannot provide enough guidance information for fine-scale pixel-level changes. In this paper, we introduce CLIPVG, a text-guided image manipulation framework using differentiable vector graphics, which is also the first CLIP-based general image manipulation framework that does not require any additional generative models. We demonstrate that CLIPVG can not only achieve state-of-the-art performance in both semantic correctness and synthesis quality, but is also flexible enough to support various applications far beyond the capability of all existing methods. 8 pages, 10 figures, AAAI202.
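    A hedged sketch of the general CLIP-guided optimization loop that CLIPVG-style methods build on (not the paper's implementation): render parameters to an image, score the image against a text prompt with CLIP, and update the parameters by gradient descent. CLIPVG itself optimizes vector-graphic parameters through a differentiable rasterizer; here a plain learnable bitmap stands in for that renderer so the sketch stays self-contained.

```python
import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

# Stand-in for a differentiable vector renderer: a learnable bitmap.  A real
# CLIPVG-style pipeline would replace this with differentiable SVG rendering.
device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
with torch.no_grad():
    text_feat = model.encode_text(clip.tokenize(["a watercolor cat"]).to(device))
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

canvas = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
opt = torch.optim.Adam([canvas], lr=0.05)
mean = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
std = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)

for step in range(100):
    img = (canvas.clamp(0, 1) - mean) / std        # CLIP's input normalization
    img_feat = model.encode_image(img)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    loss = 1 - (img_feat * text_feat).sum()        # cosine distance to the prompt
    opt.zero_grad(); loss.backward(); opt.step()
```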

    Noisy Knowledge Graph Representation Learning: a Rule-Enhanced Method

    Knowledge graphs store structured facts in the form of triples, i.e., (head entity, relation, tail entity). Large-scale knowledge graphs are usually constructed with (semi-)automated extraction methods, and this process inevitably introduces noise that can degrade the quality of the learned representations. Most traditional representation learning methods, however, assume that the triples in a knowledge graph are correct and learn distributed representations accordingly, so noise detection on knowledge graphs is a crucial task. The incompleteness of knowledge graphs is a further concern. To address both problems, a knowledge representation learning framework combining logical rules and relation path information is proposed; it learns representations and detects possible noise, with the two tasks reinforcing each other. Specifically, the framework is divided into a triple embedding part and a triple trustworthiness estimation part. The triple embedding part augments the triple structure information with relation path and logical rule information to build better representations; the latter also strengthens relation path reasoning and the interpretability of the learned representations. The triple trustworthiness estimation part further uses these three types of information to detect possible noise. Experiments on three public benchmark datasets show that the model achieves significant improvements over all baseline methods on tasks such as knowledge graph noise detection and knowledge graph completion.
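    A minimal sketch (illustrative, not the paper's model) of the two pieces the abstract describes: a structure-based triple embedding, here plain TransE, and a triple-trustworthiness score used to flag likely-noisy triples. The paper additionally injects relation-path and logical-rule information, which is omitted here; the trustworthiness score below is simply a squashed embedding energy.

```python
import torch
import torch.nn as nn

class TransEWithTrust(nn.Module):
    def __init__(self, n_ent, n_rel, dim=50):
        super().__init__()
        self.ent = nn.Embedding(n_ent, dim)
        self.rel = nn.Embedding(n_rel, dim)
        nn.init.xavier_uniform_(self.ent.weight)
        nn.init.xavier_uniform_(self.rel.weight)

    def energy(self, h, r, t):
        # TransE energy ||h + r - t||; small means the triple fits the embedding
        return (self.ent(h) + self.rel(r) - self.ent(t)).norm(p=2, dim=-1)

    def trustworthiness(self, h, r, t):
        # squash energy into (0, 1); well-fitting triples score high
        return torch.sigmoid(4.0 - self.energy(h, r, t))

model = TransEWithTrust(n_ent=1000, n_rel=20)
h, r, t = torch.tensor([3]), torch.tensor([5]), torch.tensor([42])
h_neg = torch.tensor([7])                               # corrupted head (negative sample)
margin_loss = torch.relu(1.0 + model.energy(h, r, t) - model.energy(h_neg, r, t)).mean()
margin_loss.backward()                                  # gradients for one training step
print(float(model.trustworthiness(h, r, t)))            # score for the candidate triple
```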

    Protecting browsers from DNS rebinding attacks

    DNS rebinding attacks subvert the same-origin policy of browsers, converting them into open network proxies. Using DNS rebinding, an attacker can circumvent organizational and personal firewalls, send spam email, and defraud pay-per-click advertisers. We evaluate the cost-effectiveness of mounting DNS rebinding attacks, finding that an attacker requires less than $100 to hijack 100,000 IP addresses. We analyze defenses to DNS rebinding attacks, including improvements to the classic “DNS pinning,” and recommend changes to browser plug-ins, firewalls, and Web servers. Our defenses have been adopted by plug-in vendors and by a number of open-source firewall implementations.
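    As an illustration of the firewall-side defense the abstract alludes to (a sketch, not the paper's implementation), a resolver wrapper can refuse answers in which an external hostname resolves to a private, loopback, or link-local address; this is the rebinding guard that firewalls such as dnsmasq expose via --stop-dns-rebind.

```python
import ipaddress
import socket

def resolve_guarded(hostname: str) -> list[str]:
    """Resolve hostname, rejecting answers that point into the local network,
    which is how a DNS rebinding attack redirects same-origin requests inward."""
    infos = socket.getaddrinfo(hostname, None)
    addrs = {info[4][0] for info in infos}
    for a in addrs:
        ip = ipaddress.ip_address(a)
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            raise ValueError(f"possible DNS rebinding: {hostname} -> {a}")
    return sorted(addrs)

# Example: a normal public name resolves fine, while a name whose record has
# been rebound to 192.168.x.x or 127.0.0.1 raises an error.
# print(resolve_guarded("example.com"))
```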